Motion forecasting for autonomous driving is a challenging task because complex driving scenarios result in a heterogeneous mix of static and dynamic inputs. It is an open problem how best to represent and fuse information about road geometry, lane connectivity, time-varying traffic light state, and the history of a dynamic set of agents and their interactions. To model this diverse set of input features, many proposed approaches design equally complex systems with a distinct module per input modality. This results in systems that are difficult to scale, extend, or tune in rigorous ways to trade off quality and efficiency. In this paper, we present Wayformer, a family of simple and homogeneous attention-based architectures for motion forecasting. Wayformer offers a compact model description consisting of an attention-based scene encoder and a decoder. In the scene encoder we study the choice of early, late, and hierarchical fusion of the input modalities. For each fusion type, we explore strategies to trade off efficiency and quality via factorized attention or latent query attention. We show that early fusion, despite its simplicity of construction, is not only modality-agnostic but also achieves state-of-the-art results.
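The efficiency levers named above, early fusion and latent query attention, can be made concrete with a small sketch. The following is a minimal, hypothetical NumPy illustration (not the Wayformer implementation; all names, sizes, and the identity projections are invented): multimodal inputs are early-fused by concatenation along the sequence axis, and a handful of learned latent queries cross-attend to the fused sequence, so that later layers scale with the number of latents rather than with the scene size.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def latent_query_attention(tokens, latent_queries):
    """Cross-attention from a small set of learned latent queries onto a
    long sequence of fused scene tokens (identity projections for brevity)."""
    d = tokens.shape[-1]
    scores = latent_queries @ tokens.T / np.sqrt(d)   # (L, N)
    return softmax(scores, axis=-1) @ tokens          # (L, d)

rng = np.random.default_rng(0)
# Early fusion: concatenate roadgraph, traffic-signal and agent-history
# tokens along the sequence axis before any attention is applied.
roadgraph = rng.normal(size=(64, 16))
signals   = rng.normal(size=(8, 16))
agents    = rng.normal(size=(32, 16))
scene = np.concatenate([roadgraph, signals, agents], axis=0)  # (104, 16)

latents = rng.normal(size=(4, 16))   # far fewer latents than scene tokens
out = latent_query_attention(scene, latents)
print(out.shape)  # (4, 16)
```

With 4 latents summarizing 104 scene tokens, any subsequent self-attention operates on a sequence of length 4 instead of 104, which is the quality-versus-efficiency trade-off the abstract refers to.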
Rotation equivariance is a desirable property in many practical applications such as motion forecasting and 3D perception, where it can offer benefits like sample efficiency, better generalization, and robustness to input perturbations. Vector Neurons (VN) is a recently developed framework that offers a simple yet effective approach for deriving rotation-equivariant analogs of standard machine learning operations by extending one-dimensional scalar neurons to three-dimensional "vector neurons." We introduce a novel "VN-Transformer" architecture to address several shortcomings of current VN models. Our contributions are: (i) we derive a rotation-equivariant attention mechanism which eliminates the need for the heavy feature preprocessing required by the original Vector Neurons models; (ii) we extend the VN framework to support non-spatial attributes, expanding the applicability of these models to real-world datasets; (iii) we derive a rotation-equivariant mechanism for multi-scale reduction of point-cloud resolution, greatly speeding up inference and training; (iv) we show that small tradeoffs in equivariance ($\epsilon$-approximate equivariance) can be used to obtain large improvements in numerical stability and training robustness on accelerated hardware, and we bound the propagation of equivariance violations in our models. Finally, we apply the VN-Transformer to 3D shape classification and motion forecasting with compelling results.
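Contribution (i) rests on the basic vector-neuron property that a channel-mixing linear map, applied identically to the x, y, z components of each vector feature, commutes with 3D rotations. The NumPy sketch below checks this property numerically; it is an illustration of the principle, not the paper's implementation (layer sizes and names are invented).

```python
import numpy as np

rng = np.random.default_rng(1)

def vn_linear(V, W):
    """Vector-neuron linear layer: mixes channels but applies the same map
    to the x, y, z components, so it commutes with 3D rotations."""
    return W @ V  # V: (C_in, 3), W: (C_out, C_in) -> (C_out, 3)

def random_rotation():
    Q, R = np.linalg.qr(rng.normal(size=(3, 3)))
    return Q * np.sign(np.diag(R))  # fix column signs for a canonical Q

V = rng.normal(size=(8, 3))     # 8 vector-valued features
W = rng.normal(size=(16, 8))    # channel-mixing weights
Rm = random_rotation()

# Equivariance: rotating the input then applying the layer equals
# applying the layer then rotating the output.
lhs = vn_linear(V @ Rm.T, W)
rhs = vn_linear(V, W) @ Rm.T
print(np.allclose(lhs, rhs))  # True
```

The $\epsilon$-approximate version in contribution (iv) amounts to bounding how far `lhs` and `rhs` may drift apart once finite-precision operations break this exact identity.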
Behavior prediction models have proliferated in recent years, especially in the popular real-world robotics application of autonomous driving, where representing the distribution over possible futures of moving agents is essential for safe and comfortable motion planning. In these models, the choice of coordinate frames to represent inputs and outputs has crucial trade-offs which broadly fall into one of two categories. Agent-centric models transform inputs and perform inference in agent-centric coordinates. These models are intrinsically invariant to translation and rotation between scene elements, perform best on public leaderboards, but scale quadratically with the number of agents and scene elements. Scene-centric models use a fixed coordinate system to process all agents. This gives them the advantage of sharing representations among all agents, offering efficient amortized inference computation that scales linearly with the number of agents. However, these models have to learn invariance to translation and rotation between scene elements, and typically underperform agent-centric models. In this work, we develop knowledge distillation techniques between probabilistic motion forecasting models, and apply these techniques to close the performance gap between agent-centric and scene-centric models. This improves scene-centric model performance by 13.2% on the public Argoverse benchmark, 7.8% on the Waymo Open Dataset, and up to 9.4% on a large internal dataset. These improved scene-centric models rank highly on public leaderboards and are up to 15 times more efficient than their agent-centric teacher counterparts in busy scenes.
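The distillation idea can be sketched in miniature: a student's distribution over candidate trajectory modes is pulled toward a teacher's by gradient descent on a KL objective. The NumPy toy below is an illustration with invented sizes and a hand-derived gradient, not the paper's actual training losses.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kl(p, q):
    """KL(p || q) for two discrete distributions."""
    return float(np.sum(p * (np.log(p) - np.log(q))))

rng = np.random.default_rng(2)
# Teacher (agent-centric) and student (scene-centric) logits over K
# candidate future-trajectory modes for one agent.
K = 6
teacher_logits = rng.normal(size=K)
student_logits = np.zeros(K)

p_teacher = softmax(teacher_logits)
lr = 0.5
for _ in range(1000):
    p_student = softmax(student_logits)
    # Gradient of KL(teacher || student) w.r.t. the student logits.
    student_logits -= lr * (p_student - p_teacher)

print(kl(p_teacher, softmax(student_logits)))  # ~0 after distillation
```

In the real setting the student never sees ground-truth labels alone: the teacher's richer, pose-invariant predictions act as a soft target, which is what transfers the invariances the scene-centric model would otherwise have to learn from data.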
In this work, we explore "prompt tuning," a simple yet effective mechanism for learning "soft prompts" to condition frozen language models to perform specific downstream tasks. Unlike the discrete text prompts used by GPT-3, soft prompts are learned through backpropagation and can be tuned to incorporate signals from any number of labeled examples. Our end-to-end learned approach outperforms GPT-3's few-shot learning by a large margin. More remarkably, through ablations on model size using T5, we show that prompt tuning becomes more competitive with scale: as models exceed billions of parameters, our method "closes the gap" and matches the strong performance of model tuning (where all model weights are tuned). This finding is especially relevant because large models are costly to share and serve and the ability to reuse one frozen model for multiple downstream tasks can ease this burden. Our method can be seen as a simplification of the recently proposed "prefix tuning" of Li and Liang (2021) and we provide a comparison to this and other similar approaches. Finally, we show that conditioning a frozen model with soft prompts confers benefits in robustness to domain transfer and enables efficient "prompt ensembling."
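The mechanism is easy to sketch: soft prompt vectors live in embedding space, are prepended to the frozen token embeddings, and are the only parameters updated by gradient descent. The NumPy toy below uses a mean-pooling stand-in for the frozen model and a hand-derived gradient for a squared loss; every component (sizes, names, the pooling "encoder") is invented for illustration and is far simpler than T5.

```python
import numpy as np

rng = np.random.default_rng(3)
d, vocab, prompt_len = 8, 20, 4

# "Frozen" model pieces: an embedding table and a linear readout.
embed = rng.normal(size=(vocab, d))
W_out = rng.normal(size=(d,))

# The only trainable parameters: soft prompt vectors in embedding space.
soft_prompt = np.zeros((prompt_len, d))

def forward(token_ids, prompt):
    x = np.concatenate([prompt, embed[token_ids]], axis=0)  # prepend prompt
    pooled = x.mean(axis=0)          # toy stand-in for a frozen encoder
    return pooled @ W_out            # scalar prediction

tokens = np.array([3, 7, 1])
target = 1.0
lr = 0.1
T = prompt_len + len(tokens)
for _ in range(1000):
    pred = forward(tokens, soft_prompt)
    # d loss / d prompt_row for loss = (pred - target)^2 with mean pooling.
    grad_row = 2 * (pred - target) * W_out / T
    soft_prompt -= lr * np.tile(grad_row, (prompt_len, 1))

print(forward(tokens, soft_prompt))  # close to the target of 1.0
```

Note what never changes: `embed` and `W_out` stay frozen, so one copy of the "model" could serve many tasks, each with its own tiny `soft_prompt`, which is the serving benefit the abstract highlights.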
We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10% higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60% less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real-world applications such as network classification and anomaly detection.
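The first stage of DeepWalk, truncated random walks treated as "sentences," is easy to sketch in plain Python. The walk generator below is illustrative (the toy graph and parameter values are invented); in the full method these walks would then be fed to a skip-gram model such as word2vec, which is omitted here.

```python
import random

def random_walks(adj, num_walks=10, walk_length=5, seed=42):
    """Generate truncated random walks; each walk plays the role of a
    'sentence' for a downstream skip-gram embedding model."""
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for start in adj:                 # one pass over all vertices
            walk = [start]
            while len(walk) < walk_length:
                neighbors = adj[walk[-1]]
                if not neighbors:
                    break
                walk.append(rng.choice(neighbors))
            walks.append(walk)
    return walks

# Toy graph: two triangles joined through vertex 2 and vertex 3.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
walks = random_walks(adj)
print(len(walks), len(walks[0]))  # 60 5
```

Because each pass over the vertex set is independent, walk generation parallelizes trivially, which is part of the scalability claim in the abstract.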
Machine learning models are typically evaluated by computing similarity with reference annotations and trained by maximizing that similarity. Especially in the biomedical domain, annotations are subjective and suffer from low inter- and intra-rater reliability. Since annotations only reflect one annotating entity's interpretation of the real world, this can lead to sub-optimal predictions even though the model achieves high similarity scores. Here, the theoretical concept of Peak Ground Truth (PGT) is introduced. PGT marks the point beyond which an increase in similarity with the reference annotation stops translating into better Real World Model Performance (RWMP). Additionally, a quantitative technique to approximate PGT by computing inter- and intra-rater reliability is proposed. Finally, three categories of PGT-aware strategies for evaluating and improving model performance are reviewed.
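One common way to quantify the inter-rater reliability that the PGT approximation relies on is the overlap between two annotators' binary masks, e.g. the Dice coefficient. The sketch below is a generic illustration with simulated annotators, not necessarily the paper's exact technique; the flip rates and mask sizes are invented.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total else 1.0

rng = np.random.default_rng(4)
truth = rng.random((32, 32)) < 0.3           # hypothetical "real world" mask
# Two annotators, each disagreeing with the underlying truth on ~5% of pixels.
rater1 = truth ^ (rng.random(truth.shape) < 0.05)
rater2 = truth ^ (rng.random(truth.shape) < 0.05)

inter_rater = dice(rater1, rater2)
print(inter_rater)  # noticeably below 1.0
```

Read this way, the inter-rater score acts as a ceiling: pushing model-versus-reference similarity above it is unlikely to translate into better RWMP, which is exactly the point PGT formalizes.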
Software Defect Prediction aims at predicting which software modules are most likely to contain defects. The idea behind this approach is to save time during the development process by helping to find bugs early. Defect Prediction models are based on historical data. Specifically, one can use data collected from past software distributions, or versions, of the same target application under analysis. Defect Prediction based on past versions is called Cross Version Defect Prediction (CVDP). Traditionally, static code metrics are used to predict defects. In this work, we use the Class Dependency Network (CDN) as an additional predictor of defects, combined with static code metrics. CDN data contains structural information about the target application being analyzed. Usually, CDN data is analyzed using handcrafted network measures, such as Social Network metrics. Our approach uses network embedding techniques to leverage CDN information without having to build such metrics manually. In order to use the embeddings across versions, we incorporate different embedding alignment techniques. To evaluate our approach, we performed experiments on 24 software release pairs and compared it against several benchmark methods. In these experiments, we analyzed the performance of two different graph embedding techniques, three anchor selection approaches, and two alignment techniques. We also built a meta-model based on two different embeddings and achieved a statistically significant improvement in AUC of 4.7% (p < 0.002) over the baseline method.
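A standard embedding-alignment technique of the kind the abstract mentions is orthogonal Procrustes on a set of anchor nodes shared between two versions. The NumPy sketch below is illustrative only; the paper's specific embedding methods, anchor selection, and alignment choices may differ, and all sizes here are invented.

```python
import numpy as np

def procrustes_align(X_src, X_tgt):
    """Best orthogonal map R (orthogonal Procrustes) taking anchor
    embeddings of the old version onto the new one:
    argmin_R ||X_src R - X_tgt||_F subject to R^T R = I."""
    U, _, Vt = np.linalg.svd(X_src.T @ X_tgt)
    return U @ Vt

rng = np.random.default_rng(5)
d = 8
old = rng.normal(size=(30, d))                 # class embeddings, version N
R_true, _ = np.linalg.qr(rng.normal(size=(d, d)))
new = old @ R_true + 0.01 * rng.normal(size=(30, d))  # version N+1, noisy

anchors = slice(0, 10)          # classes assumed stable across versions
R = procrustes_align(old[anchors], new[anchors])
err = np.linalg.norm(old @ R - new) / np.linalg.norm(new)
print(err)  # small residual: both versions now live in a shared space
```

Once aligned, a classifier trained on version N embeddings can score version N+1 classes directly, which is what makes the embeddings usable for CVDP.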
A key component of fact verification is evidence retrieval, often from multiple documents. Recent approaches use dense representations and condition the retrieval of each document on the previously retrieved ones. The latter step is performed over all the documents in the collection, requiring storing their dense representations in an index, thus incurring a high memory footprint. An alternative paradigm is retrieve-and-rerank, where documents are retrieved using methods such as BM25, their sentences are reranked, and further documents are retrieved conditioned on these sentences, reducing the memory requirements. However, such approaches can be brittle, as they rely on heuristics and assume hyperlinks between documents. We propose a novel retrieve-and-rerank method for multi-hop retrieval that consists of a retriever which jointly scores documents in the knowledge source and sentences from previously retrieved documents using an autoregressive formulation, guided by a proof system based on natural logic that dynamically terminates the retrieval process when the evidence is deemed sufficient. This method is competitive with current state-of-the-art methods on FEVER, HoVer and FEVEROUS-S, while using $5$ to $10$ times less memory than competing systems. Evaluation on an adversarial dataset indicates improved stability of our approach compared to commonly deployed threshold-based methods. Finally, the proof system helps humans predict model decisions correctly more often than using the evidence alone.
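The retrieve-and-rerank paradigm discussed above starts from a sparse scorer such as BM25. The sketch below implements the classic BM25 formula on toy documents; it illustrates only this first stage, not the authors' joint autoregressive retriever or the natural-logic proof system, and the documents and query are invented.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score every tokenized document against the query with BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter()                       # document frequency per term
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for term in query:
            if term not in tf:
                continue
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            norm = tf[term] + k1 * (1 - b + b * len(d) / avgdl)
            s += idf * tf[term] * (k1 + 1) / norm
        scores.append(s)
    return scores

docs = [
    "the claim is supported by the cited study".split(),
    "unrelated text about cooking pasta at home".split(),
    "the study measured the claim directly".split(),
]
scores = bm25_scores("claim study".split(), docs)
print(scores.index(max(scores)))  # 2: shorter matching doc wins via length norm
```

In the full pipeline these first-stage hits are only candidates: their sentences get reranked, and further hops are retrieved conditioned on them until the proof system deems the evidence sufficient.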
Wave propagation through the nodes and links of a network forms the basis of spectral graph theory. Nevertheless, the sounds emitted by nodes within the resonating chamber formed by a network are not well studied. The sound emitted by the vibrations of an individual node reflects the structure of the overall network topology as well as the location of that node within the network. In this article, a sound recognition neural network is trained to infer centrality measures from the nodes' waveforms. In addition to advancing network representation learning, the sounds emitted by nodes are plausible in most cases. Auralization of the network topology may open new directions in the arts, competing with network visualization.
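One plausible way to give each node a "waveform" is spectral: let every Laplacian eigenmode ring at a frequency set by its eigenvalue, weighted by the node's amplitude in that mode. The NumPy sketch below is a generic auralization of that idea, not necessarily the article's construction; the graph and time grid are invented.

```python
import numpy as np

# Toy graph: a 5-node path 0-1-2-3-4 closed by the chord 0-4.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]
n = 5
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A            # graph Laplacian

evals, evecs = np.linalg.eigh(L)
t = np.linspace(0, 10, 500)
# One illustrative waveform per node: each non-trivial eigenmode oscillates
# at frequency sqrt(eigenvalue), scaled by the node's entry in that mode.
waveforms = np.stack([
    sum(evecs[v, k] * np.cos(np.sqrt(evals[k]) * t) for k in range(1, n))
    for v in range(n)
])
print(waveforms.shape)  # (5, 500)
```

Nodes occupying symmetric positions produce similar waveforms while structurally distinct nodes sound different, which is the kind of signal a sound-recognition network could map to centrality measures.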
The rich longitudinal individual-level data available from electronic health records (EHRs) can be used to examine treatment effect heterogeneity. However, estimating treatment effects using EHR data poses several challenges, including time-varying confounding; repeated and temporally non-aligned measurements of covariates, treatment assignments, and outcomes; and loss to follow-up due to dropout. Here, we develop the Subgroup Discovery for Longitudinal Data (SDLD) algorithm, a tree-based algorithm for discovering subgroups with heterogeneous treatment effects in longitudinal studies. It combines a general data-driven interaction-tree method for subgroup discovery with longitudinal targeted maximum likelihood estimation. We apply the algorithm to EHR data to discover subgroups of people living with human immunodeficiency virus (HIV) who are at higher risk of weight gain when receiving dolutegravir-containing antiretroviral therapies (ARTs) versus non-dolutegravir-containing ARTs.
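The tree-based flavor of subgroup discovery can be illustrated with a single split: scan thresholds on one covariate and keep the split that maximizes the difference in estimated treatment effects between the two subgroups. The toy sketch below uses simulated data and naive difference-in-means effect estimates, unlike SDLD's recursive trees and targeted maximum likelihood estimation; all variable names and parameters are invented.

```python
import numpy as np

def best_subgroup_split(x, treated, outcome, min_size=30):
    """Scan thresholds on one covariate; keep the split maximizing the gap
    between naive treatment-effect estimates in the two subgroups."""
    def effect(mask):
        t, c = mask & treated, mask & ~treated
        if t.sum() < min_size or c.sum() < min_size:
            return None                    # too few subjects in one arm
        return outcome[t].mean() - outcome[c].mean()

    best = None
    for thr in np.unique(x)[:-1]:
        e_lo, e_hi = effect(x <= thr), effect(x > thr)
        if e_lo is None or e_hi is None:
            continue
        gap = abs(e_lo - e_hi)
        if best is None or gap > best[0]:
            best = (gap, float(thr), e_lo, e_hi)
    return best

rng = np.random.default_rng(7)
n = 2000
age = rng.uniform(20, 70, size=n)
treated = rng.random(n) < 0.5
# Simulated heterogeneity: treatment raises the outcome only for age > 45.
outcome = rng.normal(size=n) + treated * (age > 45) * 2.0

gap, thr, e_lo, e_hi = best_subgroup_split(age, treated, outcome)
print(thr)  # recovered threshold lies near the true changepoint of 45
```

A full tree-based method would recurse into each subgroup, and with observational EHR data the naive difference in means would be replaced by a confounding-adjusted estimator, which is the role targeted maximum likelihood estimation plays in SDLD.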